logic expression
- North America > United States > Florida > Duval County > Jacksonville (0.05)
- Asia > Afghanistan (0.05)
Subgroup Analysis via Model-based Rule Forest
Cheng, I-Ling, Hsu, Chan, Ku, Chantung, Lee, Pei-Ju, Kang, Yihuang
Machine learning models are often criticized for their black-box nature, raising concerns about their applicability in critical decision-making scenarios. Consequently, there is a growing demand for interpretable models in such contexts. In this study, we introduce Model-based Deep Rule Forests (mobDRF), an interpretable representation learning algorithm designed to extract transparent models from data. By leveraging IF-THEN rules with multi-level logic expressions, mobDRF enhances the interpretability of existing models without compromising accuracy. We apply mobDRF to identify key risk factors for cognitive decline in an elderly population, demonstrating its effectiveness in subgroup analysis and local model optimization. Our method offers a promising solution for developing trustworthy and interpretable machine learning models, particularly valuable in fields like healthcare, where understanding differential effects across patient subgroups can lead to more personalized and effective treatments.
- Europe > Austria > Vienna (0.14)
- Asia > Taiwan > Takao Province > Kaohsiung (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Research Report > Promising Solution (0.48)
- Research Report > New Finding (0.34)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.35)
- Health & Medicine > Therapeutic Area > Neurology (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Rule-Based Reasoning (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)
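The IF-THEN representation that mobDRF extracts can be illustrated with a toy sketch (not the authors' implementation; the features, thresholds, and first-match rule semantics below are invented for illustration):

```python
# Hypothetical sketch of an IF-THEN rule list in the spirit of mobDRF's
# output. Feature names and thresholds are made up, not from the paper.

def make_rule(conditions, prediction):
    """A rule is a list of (feature, op, threshold) tests plus a label."""
    def rule(x):
        for feat, op, thr in conditions:
            val = x[feat]
            if op == "<=" and not val <= thr:
                return None
            if op == ">" and not val > thr:
                return None
        return prediction
    return rule

# A tiny two-rule list for a cognitive-decline subgroup (toy thresholds).
rules = [
    make_rule([("age", ">", 75), ("education_years", "<=", 9)], "high risk"),
    make_rule([("age", "<=", 75)], "low risk"),
]

def predict(x, default="unknown"):
    # First matching rule wins, mimicking an ordered rule list.
    for rule in rules:
        out = rule(x)
        if out is not None:
            return out
    return default

print(predict({"age": 80, "education_years": 6}))   # high risk
print(predict({"age": 60, "education_years": 16}))  # low risk
```

Because every prediction is traceable to one explicit conjunction of tests, a subgroup ("age > 75 and education_years <= 9") and its local model are directly readable from the rule itself.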
Logic interpretations of ANN partition cells
Consider a binary classification problem solved with a feed-forward artificial neural network (ANN). Let the ANN be composed of a ReLU layer and several linear layers (convolution, sum-pooling, or fully connected), and assume the network was trained to high accuracy. Despite numerous suggested approaches, interpreting an artificial neural network remains challenging for humans. As a new method of interpretation, we construct a bridge between a simple ANN and logic, which lets us analyze and manipulate the semantics of an ANN using the powerful toolset of logic. To achieve this, we decompose the input space of the ANN into several network partition cells, each of which represents a linear combination that maps input values to a classifying output value. To interpret the linear map of a partition cell using logic expressions, we suggest minterm values as the input of a simple ANN. We then derive logic expressions representing interaction patterns that separate objects classified as 1 from those classified as 0. To facilitate interpretation, we present the logic expressions as binary logic trees.
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- Europe > Hungary > Budapest > Budapest (0.04)
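The partition-cell idea — a ReLU network is piecewise linear, so each activation pattern fixes one affine map from inputs to output — can be sketched for a tiny two-unit network (all weights below are invented for illustration):

```python
# Sketch of network partition cells: inside one cell (one ReLU activation
# pattern), the network reduces to a single affine map. Toy weights only.

W1 = [[1.0, -1.0], [0.5, 1.0]]   # hidden layer weights (2 units, 2 inputs)
b1 = [0.0, -0.5]
w2 = [1.0, -2.0]                 # output layer weights
b2 = 0.25

def activation_pattern(x):
    """Which ReLU units are active for input x; this indexes the cell."""
    pre = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(W1, b1)]
    return tuple(int(p > 0) for p in pre)

def cell_affine_map(pattern):
    """Effective weights and bias of the network inside one cell."""
    w_eff = [sum(w2[j] * pattern[j] * W1[j][i] for j in range(2))
             for i in range(2)]
    b_eff = sum(w2[j] * pattern[j] * b1[j] for j in range(2)) + b2
    return w_eff, b_eff

x = [1.0, 0.2]
pat = activation_pattern(x)          # the cell containing x
w_eff, b_eff = cell_affine_map(pat)
out_linear = sum(w * xi for w, xi in zip(w_eff, x)) + b_eff
```

Inside the cell indexed by `pat`, the network output equals this affine map exactly; the map per cell is the object the paper then interprets via minterm inputs and logic expressions.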
Neuro-Symbolic Recommendation Model based on Logic Query
Wu, Maonian, Chen, Bang, Zhu, Shaojun, Zheng, Bo, Peng, Wei, Zhang, Mingyi
A recommendation system assists users in finding items that are relevant to them. Existing recommendation models are primarily based on predicting relationships between users and items, and they use complex matching models or incorporate extensive external information to capture association patterns in data. However, recommendation is not only a problem of inductive statistics over data; it is also a cognitive task of reasoning about decisions based on knowledge extracted from information. Hence, a logic system could naturally be incorporated for the reasoning in a recommendation task. Yet although hard-rule approaches based on logic systems can provide powerful reasoning ability, they struggle to cope with the inconsistent and incomplete knowledge of real-world tasks, especially complex tasks such as recommendation. Therefore, in this paper we propose a neuro-symbolic recommendation model, which transforms a user's historical interactions into a logic expression and then recasts recommendation prediction as a query task over this logic expression. The logic expressions are computed with the modular logic operations of the neural network. We also construct an implicit logic encoder to reduce the complexity of the logic computation. Finally, a user's items of interest can be queried in the vector space based on the computation results. Experiments on three well-known datasets show that our method outperforms state-of-the-art shallow, deep, session-based, and reasoning models.
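The core reformulation — turning a user's history into a logic expression and scoring a candidate item as a query on it — can be sketched with soft-logic operations on scalar scores (in the paper these operations are learned neural modules over vectors; the item scores and the product t-norm below are illustrative assumptions):

```python
# Sketch of recommendation as a logic query: history items (h1, h2, ...)
# and a candidate t form the expression (h1 AND h2 AND ...) -> t, scored
# with soft logic. Scores and t-norm choice are made up for illustration.

def AND(p, q):
    return p * q                     # product t-norm

def NOT(p):
    return 1.0 - p

def OR(p, q):
    return NOT(AND(NOT(p), NOT(q)))  # De Morgan: p OR q = NOT(NOT p AND NOT q)

def implies(p, q):
    return OR(NOT(p), q)             # p -> q  ==  (NOT p) OR q

def query_score(history_scores, candidate_score):
    """Score the query (h1 AND h2 AND ...) -> candidate."""
    premise = 1.0
    for s in history_scores:
        premise = AND(premise, s)
    return implies(premise, candidate_score)

# Rank two hypothetical candidate items against the same history.
history = [0.9, 0.8, 0.95]
print(query_score(history, 0.7))
print(query_score(history, 0.2))
```

Ranking candidates by their query score is what the vector-space version does with learned modules instead of these fixed scalar operations.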
Neural-Symbolic Recommendation with Graph-Enhanced Information
Chen, Bang, Peng, Wei, Wu, Maonian, Zheng, Bo, Zhu, Shaojun
The recommendation system is not only a problem of inductive statistics from data but also a cognitive task that requires reasoning ability. The most advanced graph neural networks have been widely used in recommendation systems because they can capture implicit structured information from graph-structured data. However, like most neural network algorithms, they only learn matching patterns from a perception perspective. Some researchers use user behavior for logic reasoning to achieve recommendation prediction from the perspective of cognitive reasoning, but this kind of reasoning is a local one and ignores implicit information on a global scale. In this work, we combine the advantages of graph neural networks and propositional logic operations to construct a neuro-symbolic recommendation model with both global implicit reasoning ability and local explicit logic reasoning ability. We first build an item-item graph based on the principle of adjacent interaction and use graph neural networks to capture implicit information in global data. Then we transform user behavior into propositional logic expressions to achieve recommendations from the perspective of cognitive reasoning.
- Asia > China > Zhejiang Province (0.04)
- Asia > China > Guizhou Province > Guiyang (0.04)
Neural Logic Reasoning
Shi, Shaoyun, Chen, Hanxiong, Ma, Weizhi, Mao, Jiaxin, Zhang, Min, Zhang, Yongfeng
Recent years have witnessed the success of deep neural networks in many research areas. The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the ability of cognitive reasoning. However, the concrete ability of reasoning is critical to many theoretical and practical problems. On the other hand, traditional symbolic reasoning methods do well in making logical inference, but they mostly rely on hard, rule-based reasoning, which limits their generalization ability across tasks since different tasks may require different rules. Both reasoning and generalization ability are important for prediction tasks such as recommender systems, where reasoning provides a strong connection between user history and target items for accurate prediction, and generalization helps the model draw a robust user portrait from noisy inputs. In this paper, we propose the Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning. LINN is a dynamic neural architecture that builds the computational graph according to input logical expressions. It learns basic logical operations such as AND, OR, and NOT as neural modules, and conducts propositional logical reasoning through the network for inference. Experiments on a theoretical task show that LINN achieves significant performance in solving logical equations and inferring variable values. Furthermore, we test our approach on the practical task of recommendation by formulating the task as a logical inference problem. Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation, which verifies the potential of LINN in practice.
- Asia > China > Beijing > Beijing (0.04)
- Europe > Ireland (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
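LINN's dynamic-architecture idea — assembling the computation graph from the input logical expression — can be sketched with pluggable AND/OR/NOT modules (the crisp Boolean functions below stand in for the learned neural modules; the nested-tuple expression encoding is invented for illustration):

```python
# Sketch of building a computation graph from a logical expression, with
# AND/OR/NOT supplied as interchangeable modules (learned networks in the
# paper; crisp Booleans here so the behavior is easy to check).

def evaluate(expr, modules, env):
    """Recursively evaluate a nested-tuple expression such as
    ("OR", ("NOT", "a"), ("AND", "a", "b")) against variable bindings."""
    if isinstance(expr, str):                 # leaf: variable lookup
        return env[expr]
    op, *args = expr
    values = [evaluate(a, modules, env) for a in args]
    return modules[op](*values)               # one module call per node

crisp = {
    "AND": lambda p, q: p and q,
    "OR":  lambda p, q: p or q,
    "NOT": lambda p: not p,
}

# (NOT a) OR (a AND b)
expr = ("OR", ("NOT", "a"), ("AND", "a", "b"))
print(evaluate(expr, crisp, {"a": True, "b": False}))   # False
print(evaluate(expr, crisp, {"a": False, "b": False}))  # True
```

Swapping `crisp` for differentiable modules over embedding vectors gives the trainable version: the graph structure still comes from the expression, while the module parameters are learned.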
Rejoinder for the discussion of the paper "A novel algorithmic approach to Bayesian Logic Regression"
Hubin, Aliaksandr, Storvik, Geir, Frommlet, Florian
We would like to begin this rejoinder by expressing our sincere gratitude to all of the discussants for their interesting and thought-provoking comments and remarks. We are also heartily thankful to the editorial board of Bayesian Analysis for giving us the opportunity to publish our paper entitled "A novel algorithmic approach to Bayesian logic regression" (Hubin et al., 2020a) as a discussion article. Logic regression is a tool to model nonlinear relationships between binary covariates and some response variable by constructing predictors as Boolean combinations. The number of possible logic expressions grows exponentially with the number of binary variables involved, making the model search significantly harder as the complexity of the Boolean combinations increases. Due to Boolean equivalence, it is in fact almost impossible to specify the full model space a priori, even for a relatively small number of covariates.
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
Neural Logic Networks
Shi, Shaoyun, Chen, Hanxiong, Zhang, Min, Zhang, Yongfeng
Recent years have witnessed the great success of deep neural networks in many research areas. The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the ability to perform logical reasoning. However, the concrete ability of logical reasoning is critical to many theoretical and practical problems. In this paper, we propose the Neural Logic Network (NLN), a dynamic neural architecture that builds the computational graph according to input logical expressions. It learns basic logical operations as neural modules, and conducts propositional logical reasoning through the network for inference. Experiments on simulated data show that NLN achieves significant performance in solving logical equations. Further experiments on real-world data show that NLN significantly outperforms state-of-the-art models on collaborative filtering and personalized recommendation tasks.
- North America > United States (0.04)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Asia > China > Beijing > Beijing (0.04)